Conversation
Turing.jl documentation for PR #2789 is available at:
Codecov Report ❌ Patch coverage is

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #2789       +/-   ##
===========================================
- Coverage   85.23%   18.43%   -66.81%
===========================================
  Files          22       23        +1
  Lines        1483     1671      +188
===========================================
- Hits         1264      308      -956
- Misses        219     1363     +1144

View full report in Codecov by Sentry.
@penelopeysm I am running into the following issue when touching up my implementation. Suppose we run the following simple example:

using Random
using Turing
@model function linear_regression(x, y)
    β ~ Normal(0, 1)
    σ ~ truncated(Cauchy(0, 3); lower=0)
    for t in eachindex(x)
        y[t] ~ Normal(β * x[t], σ)
    end
end
# condition the model
rng = MersenneTwister(1234)
x, y = rand(rng, 10), rand(rng, 10)
reg_model = linear_regression(x, y)
rng = MersenneTwister(1234)
particles = sample(rng, reg_model, SMC(0.5), 512, ensemble=MCMCSerial());

where I get the following error: ExceptionStack …

Not entirely sure why this is, considering that when I run this outside of the Turing environment (same exact code from …) it runs without error.
First, I found that …
I fixed this locally, but forgot to commit to the PR.

Co-authored-by: Penelope Yong <penelopeysm@gmail.com>
This PR TuringLang/Libtask.jl#219 should fix the Libtask errors; with that PR, plus once you add the DynamicPPL imports, it runs correctly for me. If I had to take a guess, it was probably working for you before because the variables weren't truly global (were they in a function, or some other local scope?)
All it took was the imports, though I appreciate the second look at Libtask. I don't think there's any global manipulation here, unless you count the task-local storage trickery used by Libtask.
I don't understand that at all, but if there's no error now, I guess I'll take the win.
(The point about the globals was that if you run the code snippet above in the REPL, then x and y are global variables.)
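To make the scoping point concrete, here is a minimal plain-Julia illustration (no Turing involved; variable names are mine): a top-level assignment creates a binding visible at module scope, while an assignment inside a function creates a local that never escapes, which is the difference between running the snippet at the REPL versus wrapping it in a function.

```julia
g = 1                # top-level assignment: a module-level "global" binding

function f()
    l = 2            # assignment inside a function body: a local binding
    return l
end
f()

println(@isdefined g)   # true  -- g is visible at top level
println(@isdefined l)   # false -- l stayed local to f and never leaked out
```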
@charlesknipp, it would be nice if we could push this to completion if nothing major is missing.
@yebai I was waiting for the new DPPL features to stabilize a bit more, so that I could properly handle …

Also, as you can tell, I am failing every unit test known to man. I still need to polish a bit more before comfortably merging. I should have an updated interface by the end of the week, and I'll keep you both updated with any roadblocks.
Update

After conforming to some of the new design patterns with …, I noticed a bug (introduced somewhere in this PR's history) which forced the reference trajectory to always resample, rendering it useless. This is now fixed, but with some additional caveats. The following will test for this pattern:

# collect VNTs
rng = MersenneTwister(1234)
particles = Turing.Inference.smcsample(rng, default_model, SMC(0.5), MCMCSerial(), 2);
init_raw = map(p -> Turing.Inference.raw_from_particle(p), particles)
# set reference trajectory
state = sample(rng, particles);
ref = deepcopy(state);
# run conditional SMC
particles = Turing.Inference.smcsample(rng, default_model, SMC(0.5), MCMCSerial(), 2; ref);
raw_ref = Turing.Inference.raw_from_particle(particles.reference)
# confirm that the reference is untouched
@test raw_ref in init_raw

This fixes one aspect of PG, but breaks functionality when assigning a particle to the reference trajectory. I will keep experimenting and let you know when the PR is once again stable.
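The invariant the test above checks can be sketched with a hypothetical helper (names and structure are mine, not Turing's internals): in conditional SMC the reference trajectory must survive resampling, e.g. by pinning it to slot 1 and drawing ancestors only for the remaining N - 1 slots.

```julia
using Random

# Hypothetical sketch of reference-preserving resampling (not Turing's code):
# the reference particle is pinned to slot 1; only the other slots are
# resampled via an inverse-CDF draw over the particle weights.
function resample_with_reference(rng, particles, weights, ref)
    cw = cumsum(weights)
    draw() = searchsortedfirst(cw, rand(rng) * cw[end])  # ancestor index draw
    out = similar(particles)
    out[1] = ref                       # the reference is never resampled away
    for i in 2:length(particles)
        out[i] = particles[draw()]
    end
    return out
end

rng = MersenneTwister(1234)
particles = [10.0, 20.0, 30.0]
out = resample_with_reference(rng, particles, [0.2, 0.3, 0.5], 20.0)
println(out[1] == 20.0)   # true: the reference survives every sweep
```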
Following the latest major release of DynamicPPL, I finally scraped together my ideas for the long-awaited restructuring of SMC within Turing.
Major Changes
- `TracedModel` works quite differently under the hood. While Libtask is still a central component of this mechanic, the `produce` call from accumulation is delayed to ensure `VarInfo` is caught up in terms of log-likelihoods.
- `SMC` no longer uses AbstractMCMC to interface; however, `ParticleGibbs` still does. `AbstractMCMCEnsemble` now interacts with SMC samplers along the reweighting step, which is designed to operate in parallel.
- `TracedRNG` has been removed in favor of a taped global RNG manipulation to facilitate allocation-efficient replayability of reference trajectories in Particle Gibbs.

TODO List
The original interface is for the most part replicated in its entirety. There are a handful of minor tweaks I need to make in order to better reinterface with some internal methods.
- `PartialLogDensity`, or something to facilitate a partially observed model
- `smcsample` with MCMCChains/FlexiChains so that it's consistent with the rest of the module

Notes
This is all based on my self-contained reimplementation here, which serves more as a workspace for demonstration and experimentation. You can think of this PR as a subset of my personal repo; with that being said, I have a couple of questions on proper integration:
Lastly, I would really appreciate help with the realization of particle rejuvenation. I have a demo over on TuringSMC which showcases a proof of concept.
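As an aside on the delayed-`produce` design mentioned under Major Changes: a rough plain-Julia analogy (Channels standing in for Libtask tasks; this is not Turing's or Libtask's actual code) of why accumulation should happen before yielding, so the consumer always observes an up-to-date log-likelihood accumulator at each yield point.

```julia
# Analogy only: a Channel-backed producer in place of a Libtask task.
# The accumulator is updated *before* each put!, mirroring how the
# produce call is delayed until VarInfo's log-likelihood is caught up.
function particle_steps(logliks)
    Channel{Float64}() do ch
        acc = 0.0
        for ll in logliks
            acc += ll        # accumulate the log-likelihood first...
            put!(ch, acc)    # ...then yield control, analogous to produce
        end
    end
end

steps = collect(particle_steps([-1.0, -0.5, -2.0]))
println(steps)   # [-1.0, -1.5, -3.5]: cumulative log-likelihood at each yield
```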